Fine-tuning language models to find agreement among humans with diverse preferences (Appendix)

Neural Information Processing Systems

We refer to Table S2 for example questions from a subset of clusters. Each participant first read the task instructions (see Figure S2) and completed a short comprehension test. The comprehension check was designed to test participants' knowledge and understanding of key aspects of the experiment. Once all players had joined, the group started the main experiment. In practice, data was collected in batches of around 20 groups (100 participants) in parallel.



ORCHID: A Chinese Debate Corpus for Target-Independent Stance Detection and Argumentative Dialogue Summarization

Zhao, Xiutian, Wang, Ke, Peng, Wei

arXiv.org Artificial Intelligence

Dialogue agents have been receiving increasing attention for years, and this trend has been further boosted by the recent progress of large language models (LLMs). Stance detection and dialogue summarization are two core tasks of dialogue agents in application scenarios that involve argumentative dialogues. However, research on these tasks is limited by the insufficiency of public datasets, especially for non-English languages. To address this language resource gap in Chinese, we present ORCHID (Oral Chinese Debate), the first Chinese dataset for benchmarking target-independent stance detection and debate summarization. Our dataset consists of 1,218 real-world debates that were conducted in Chinese on 476 unique topics, containing 2,436 stance-specific summaries and 14,133 fully annotated utterances. Besides providing a versatile testbed for future research, we also conduct an empirical study on the dataset and propose an integrated task. The results show the challenging nature of the dataset and suggest the potential of incorporating stance detection into summarization for argumentative dialogue.
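The corpus structure the abstract describes (debates on topics, per-utterance stance labels, and two stance-specific summaries per debate, i.e. 2,436 summaries for 1,218 debates) can be sketched as a record layout. The field and class names below are illustrative assumptions, not ORCHID's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative record layout for a debate corpus like ORCHID.
# Names ("Utterance", "Debate", stance labels) are assumptions
# for exposition, not the dataset's published format.

@dataclass
class Utterance:
    speaker: str
    text: str
    stance: str  # per-utterance stance annotation, e.g. "pro" or "con"

@dataclass
class Debate:
    topic: str
    utterances: list = field(default_factory=list)
    # one stance-specific summary per side: stance label -> summary text
    summaries: dict = field(default_factory=dict)

def summary_count(debates):
    """Total stance-specific summaries across a list of debates."""
    return sum(len(d.summaries) for d in debates)
```

With two summaries per debate, `summary_count` over 1,218 such records would yield the abstract's 2,436.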


A University Framework for the Responsible use of Generative AI in Research

Smith, Shannon, Tate, Melissa, Freeman, Keri, Walsh, Anne, Ballsun-Stanton, Brian, Hooper, Mark, Lane, Murray

arXiv.org Artificial Intelligence

Generative Artificial Intelligence (generative AI) poses both opportunities and risks for the integrity of research. Universities must guide researchers in using generative AI responsibly, and in navigating a complex regulatory landscape subject to rapid change. By drawing on the experiences of two Australian universities, we propose a framework to help institutions promote and facilitate the responsible use of generative AI. We provide guidance to help distil the diverse regulatory environment into a principles-based position statement. Further, we explain how a position statement can then serve as a foundation for initiatives in training, communications, infrastructure, and process change. Despite the growing body of literature about AI's impact on academic integrity for undergraduate students, there has been comparatively little attention on the impacts of generative AI for research integrity, and the vital role of institutions in helping to address those challenges. This paper underscores the urgency for research institutions to take action in this area and suggests a practical and adaptable framework for doing so.


Fine-tuning language models to find agreement among humans with diverse preferences

Bakker, Michiel A., Chadwick, Martin J., Sheahan, Hannah R., Tessler, Michael Henry, Campbell-Gillingham, Lucy, Balaguer, Jan, McAleese, Nat, Glaese, Amelia, Aslanides, John, Botvinick, Matthew M., Summerfield, Christopher

arXiv.org Artificial Intelligence

Recent work in large language modeling (LLMs) has used fine-tuning to align outputs with the preferences of a prototypical user. This work assumes that human preferences are static and homogeneous across individuals, so that aligning to a single "generic" user will confer more general alignment. Here, we embrace the heterogeneity of human preferences to consider a different challenge: how might a machine help people with diverse views find agreement? We fine-tune a 70 billion parameter LLM to generate statements that maximize the expected approval for a group of people with potentially diverse opinions. Human participants provide written opinions on thousands of questions touching on moral and political issues (e.g., "should we raise taxes on the rich?"), and rate the LLM's generated candidate consensus statements for agreement and quality. A reward model is then trained to predict individual preferences, enabling it to quantify and rank consensus statements in terms of their appeal to the overall group, defined according to different aggregation (social welfare) functions. The model produces consensus statements that are preferred by human users over those from prompted LLMs (>70%) and significantly outperforms a tight fine-tuned baseline that lacks the final ranking step. Further, our best model's consensus statements are preferred over the best human-generated opinions (>65%). We find that when we silently constructed consensus statements from only a subset of group members, those who were excluded were more likely to dissent, revealing the sensitivity of the consensus to individual contributions. These results highlight the potential to use LLMs to help groups of humans align their values with one another.
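The ranking step described above (a reward model predicts each individual's approval of a candidate statement, then a social welfare function aggregates those predictions to rank candidates) can be sketched as follows. This is a minimal illustration, assuming a `reward_model(person, statement) -> float` interface; the function names and the two example welfare functions are stand-ins, not the paper's implementation.

```python
# Hypothetical sketch of ranking candidate consensus statements by a
# social welfare function applied to per-participant reward-model scores.

def utilitarian(scores):
    """Mean predicted approval across the group."""
    return sum(scores) / len(scores)

def egalitarian(scores):
    """Approval of the least-satisfied member (Rawlsian max-min)."""
    return min(scores)

def rank_candidates(candidates, group, reward_model, welfare=utilitarian):
    """Order candidate statements by aggregated predicted approval.

    reward_model(person, statement) is assumed to return that person's
    predicted approval of the statement as a float.
    """
    scored = [
        (welfare([reward_model(p, c) for p in group]), c)
        for c in candidates
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]
```

Note that the choice of welfare function changes the ranking: a statement one member strongly dislikes can still win under the utilitarian mean but not under the egalitarian minimum, which is why the aggregation function is a design decision rather than a detail.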


Dark truth behind Jacinda 'smoking' video

#artificialintelligence

When a video purporting to show New Zealand Prime Minister Jacinda Ardern smoking drugs surfaced on social media in recent months, experts quickly dismissed it as a fake. The video, which was viewed and shared thousands of times, showed a woman smoking from what appeared to be a crack pipe. The PM's face had been superimposed using artificial intelligence. But the video, created for YouTube, was convincing enough to the many who shared it. It was the latest example of how disturbingly authentic-looking videos can blur the lines between reality and fantasy.



Artificial Intelligence Could Help Curb Sleep Disorders

#artificialintelligence

Artificial intelligence has already proved its potential in diverse areas, performing tedious, mundane tasks in complex environments and enabling businesses to drive efficiency. Everyday routines are now shaped by the technology, which gives people new capabilities in their work. AI even offers healthcare professionals the ability to perform crucial treatments with ease. Now the technology could be leveraged to improve efficiency and precision in sleep disorder treatment, resulting in improved care and better patient outcomes, according to the American Academy of Sleep Medicine's (AASM) new position statement. Developed by the AASM's Artificial Intelligence in Sleep Medicine Committee and published in the Journal of Clinical Sleep Medicine, the position statement noted that the electrophysiological data collected during polysomnography – the most comprehensive type of sleep study – is well positioned for enhanced analysis with AI and machine learning.


Artificial intelligence could enhance diagnosis and treatment of sleep disorders

#artificialintelligence

Artificial intelligence has the potential to improve efficiencies and precision in sleep medicine, resulting in more patient-centered care and better outcomes, according to a new position statement from the American Academy of Sleep Medicine. Published online as an accepted paper in the Journal of Clinical Sleep Medicine, the position statement was developed by the AASM's Artificial Intelligence in Sleep Medicine Committee. According to the statement, the electrophysiological data collected during polysomnography--the most comprehensive type of sleep study--is well-positioned for enhanced analysis through AI and machine-assisted learning. "When we typically think of AI in sleep medicine, the obvious use case is for the scoring of sleep and associated events," said lead author and committee Chair Dr. Cathy Goldstein, associate professor of sleep medicine and neurology at the University of Michigan. "This would streamline the processes of sleep laboratories and free up sleep technologist time for direct patient care."


A Computational Cognitive Model of Mirroring Processes: A Position Statement

Vered, Mor (Bar Ilan University) | Kaminka, Gal (Bar Ilan University)

AAAI Conferences

In order to fully utilize robots for our benefit and design better agents that can collaborate smoothly and naturally with humans we need to understand how humans think. My goal is to understand the mirroring process and use that knowledge to build a computational cognitive model to enable a robot/agent to infer intentions and therefore collaborate more naturally in a human environment.